

Search for: All records

Creators/Authors contains: "Guo, Qiming"

Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full text articles may not yet be available without a charge during the embargo (administrative interval).

Some links on this page may take you to non-federal websites. Their policies may differ from this site.

  1. Machine unlearning is becoming increasingly important as deep models become more prevalent, particularly when there are frequent requests to remove the influence of specific training data due to privacy concerns or erroneous sensing signals. Spatio-temporal Graph Neural Networks have been widely adopted in real-world applications that demand efficient unlearning, yet research in this area remains in its early stages. In this paper, we introduce STEPS, a framework specifically designed to address the challenges of spatio-temporal graph unlearning. Our results demonstrate that STEPS not only ensures data continuity and integrity but also significantly reduces the time required for unlearning, while minimizing the accuracy loss relative to a model with no unlearning applied.
    Free, publicly-accessible full text available April 11, 2026
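    The abstract above contrasts unlearning time against the accuracy of a model with no unlearning applied. The STEPS algorithm itself is not described here, but the general trade-off it targets can be sketched with a toy, hypothetical example: exact unlearning retrains from scratch on the retained data, while approximate unlearning fine-tunes the already-trained weights for far fewer steps. All model and variable names below are illustrative, not from the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def train_logreg(X, y, w=None, epochs=200, lr=0.5):
        """Gradient-descent logistic regression; pass w to fine-tune, None to train from scratch."""
        if w is None:
            w = np.zeros(X.shape[1])
        for _ in range(epochs):
            p = 1.0 / (1.0 + np.exp(-np.clip(X @ w, -30, 30)))
            w -= lr * X.T @ (p - y) / len(y)
        return w

    # Synthetic data: 100 samples, 3 features; label depends on feature 0.
    X = rng.normal(size=(100, 3))
    y = (X[:, 0] + 0.1 * rng.normal(size=100) > 0).astype(float)

    w_full = train_logreg(X, y)                  # model trained on all data

    forget = np.arange(10)                       # samples whose influence must be removed
    keep = np.setdiff1d(np.arange(100), forget)

    # Exact unlearning baseline: retrain from scratch on retained data only.
    w_retrain = train_logreg(X[keep], y[keep])

    # Approximate unlearning: fine-tune the existing model for 10x fewer
    # steps -- the kind of speedup unlearning frameworks aim to deliver
    # while keeping accuracy close to the baseline.
    w_finetune = train_logreg(X[keep], y[keep], w=w_full.copy(), epochs=20)

    def accuracy(w, X, y):
        return float(np.mean(((X @ w) > 0) == (y > 0.5)))

    print(accuracy(w_retrain, X[keep], y[keep]))
    print(accuracy(w_finetune, X[keep], y[keep]))
    ```

    The point of the sketch is only the structure of the problem: the fine-tuned model reuses the weights that were shaped by the forget set, so approximate methods must also argue that the forgotten samples' influence is actually gone, which is where the real research difficulty lies.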
  2. Balzarotti, Davide; Xu, Wenyuan (Ed.)
    On-device ML is increasingly used in a wide range of applications. It brings convenience to offline tasks and avoids sending user-private data over the network. On-device ML models are valuable and may suffer from model extraction attacks of several categories. Existing studies lack a deep understanding of on-device ML model security, which creates a gap between research and practice. This paper provides a systematization that classifies existing model extraction attacks and defenses according to their threat models. We evaluated well-known research projects from prior work against real-world ML models, and discussed their reproducibility, computational complexity, and power consumption. We identified the challenges that prevent these research projects from being widely adopted in practice, and provided directions for future research in ML model extraction security.
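    The survey above systematizes model extraction attacks, in which an adversary reconstructs a model's behavior from black-box access alone. A minimal, hypothetical sketch of the learning-based variant (every name here is illustrative, not from the paper): the attacker queries a "victim" model for labels and trains a surrogate that mimics its input-output behavior without ever seeing its weights.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Victim: a fixed linear classifier the attacker can only query as a
    # black box -- e.g. an on-device model exposed through a prediction API.
    W_victim = rng.normal(size=(4,))
    def victim_predict(X):
        return ((X @ W_victim) > 0).astype(float)

    # Step 1: the attacker synthesizes queries and collects the victim's labels.
    X_query = rng.normal(size=(500, 4))
    y_stolen = victim_predict(X_query)

    # Step 2: fit a surrogate (logistic regression) to the stolen labels.
    w_sur = np.zeros(4)
    for _ in range(300):
        p = 1.0 / (1.0 + np.exp(-np.clip(X_query @ w_sur, -30, 30)))
        w_sur -= 0.5 * X_query.T @ (p - y_stolen) / len(y_stolen)

    # Fidelity: how often the surrogate agrees with the victim on fresh inputs.
    X_test = rng.normal(size=(1000, 4))
    fidelity = float(np.mean(((X_test @ w_sur) > 0) == (victim_predict(X_test) > 0.5)))
    print(fidelity)
    ```

    Real attacks on on-device models differ mainly in how queries are chosen and in what the API exposes (labels, confidence scores, or the model file itself), which is exactly the threat-model axis the systematization classifies attacks and defenses along.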